# High-throughput Inference
## QwQ-32B INT8 W8A8

An INT8-quantized (W8A8) version of QwQ-32B, optimized by reducing the bit width of both weights and activations to 8-bit integers.

- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Publisher: ospatch
- Downloads: 590 · Likes: 4
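To make the W8A8 idea concrete, the sketch below shows symmetric per-tensor INT8 quantization of a weight matrix and an activation batch in PyTorch. The function name, shapes, and per-tensor scaling are illustrative assumptions, not necessarily the exact scheme used to produce this checkpoint (production quantizers typically use per-channel or per-group scales and calibration data).

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor INT8 quantization: map floats to integers in [-127, 127]."""
    scale = x.abs().max() / 127.0  # one scale for the whole tensor (illustrative choice)
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

# Toy FP32 weight matrix and activation batch standing in for one linear layer.
w = torch.randn(1024, 1024)
a = torch.randn(8, 1024)

# W8A8: both weights and activations are stored as 8-bit integers plus a scale.
w_q, w_scale = quantize_int8(w)
a_q, a_scale = quantize_int8(a)

# Real W8A8 kernels run this matmul in INT8 with INT32 accumulation; the integer
# values are cast to float here only so the toy example runs on any backend.
y = (a_q.float() @ w_q.float().T) * (a_scale * w_scale)

# Compare against the full-precision result to gauge the quantization error.
y_ref = a @ w.T
rel_err = ((y - y_ref).norm() / y_ref.norm()).item()
print(f"relative error: {rel_err:.4f}")
```

Storing weights and activations in 8 bits roughly halves memory traffic relative to 16-bit formats and lets serving engines use INT8 matrix kernels, which is what makes W8A8 checkpoints attractive for high-throughput inference.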